Model Checking Tools for Parallelizing Compilers
Authors
Abstract
In this paper we apply temporal logic and model checking to analyze the structure of a source program represented as a process graph. The nodes of this graph are sequential processes whose computations are specified as transition systems; the edges are dependence (flow and control) relations between the computations at the nodes. This process graph is used as an intermediate source program representation by a parallelizing compiler. By labeling the nodes and the edges of the process graph with descriptive atomic propositions and by specifying the conditions necessary for optimizations and parallelizations as temporal logic formulas, we can use a model checker to locate nodes and sub-graphs of the process graph where particular optimizations can be made. We illustrate this technique by showing how a parallelizing compiler can determine if the iterations of an enumerated loop can be executed concurrently. To add or modify optimizations in this parallelizing compiler, we need only specify their conditions as temporal logic formulas. We do not need to add or modify compiler code. This greatly simplifies the process of locating optimization and parallelization opportunities in the source program and makes it easier to experiment with complex optimizations. Hence, this methodology provides a convenient, concise, and formal framework to carry out program optimizations by compilers under the control of programmers.
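As an illustration of the idea, the following Python sketch labels a small process graph with hypothetical atomic propositions (enumerated_loop, control_dep, loop_carried_flow_dep) and uses a plain reachability check in place of a full temporal-logic model checker to find loop headers whose iterations can run concurrently. All node names, labels, and helper functions are assumptions chosen for the example; this is not the authors' implementation.

# A minimal sketch (not the paper's implementation) of querying a labeled
# process graph the way a parallelizing compiler might. Propositions and
# node names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    props: set = field(default_factory=set)   # atomic propositions at this node
    succ: list = field(default_factory=list)  # (edge_propositions, target_name) pairs

def build_example_graph():
    """Process graph for a small enumerated loop: header -> body -> header."""
    g = {
        "L1_header": Node("L1_header", {"enumerated_loop"}),
        "L1_body":   Node("L1_body",   {"statement"}),
    }
    # Edge labels describe dependences; this loop body carries no
    # loop-carried flow dependence, so its iterations are independent.
    g["L1_header"].succ.append(({"control_dep"}, "L1_body"))
    g["L1_body"].succ.append(({"loop_back"}, "L1_header"))
    return g

def ag_no_edge_prop(graph, start, prop):
    """Check that proposition `prop` labels no edge reachable from `start`
    (a stand-in for checking AG(!prop) over the loop sub-graph)."""
    seen, stack = set(), [start]
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        for edge_props, target in graph[n].succ:
            if prop in edge_props:
                return False
            stack.append(target)
    return True

def parallelizable_loops(graph):
    """Loop headers whose sub-graph contains no loop-carried flow dependence."""
    return [n for n, node in graph.items()
            if "enumerated_loop" in node.props
            and ag_no_edge_prop(graph, n, "loop_carried_flow_dep")]

if __name__ == "__main__":
    g = build_example_graph()
    print(parallelizable_loops(g))  # ['L1_header']: iterations may run concurrently

In the framework described above, the same condition would be expressed as a temporal logic formula over the labeled process graph and discharged by a general-purpose model checker, rather than by the hand-written traversal shown here; changing the optimization then means changing only the formula.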
Similar Articles
Model Checking as a Tool Used by Parallelizing Compilers
In this paper we describe the usage of temporal logic and model checking in a parallelizing compiler to analyze the structure of a source program and locate opportunities for optimization and parallelization. The source program is represented as a process graph in which the nodes are sequential processes and the edges are control and data dependence relationships between the computations at the...
Estimating the Parallel Start-Up Overhead for Parallelizing Compilers
A technique for estimating the cost of executing a loop nest in parallel (parallel start-up overhead) is described in this paper. This technique is of utmost importance for parallelizing compilers which take decisions on the basis of predicting performance through the quantification of overheads. Such a model is analyzed and the necessary conditions for computing an estimate for the parallel st...
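As a rough illustration of how such an estimate might be used (the cost model below is an assumption for the example, not the model analyzed in the paper), a compiler could compare the predicted parallel execution time, including the start-up overhead, against the sequential time before deciding to parallelize a loop nest:

# A minimal sketch of a parallelization decision driven by an estimated
# parallel start-up overhead. The linear cost model is an assumption.

def worth_parallelizing(iterations, cost_per_iter, n_procs, startup_overhead):
    """Return True if the predicted parallel time beats the sequential time.

    iterations       -- total iteration count of the loop nest
    cost_per_iter    -- estimated cost of one iteration (e.g., cycles)
    n_procs          -- number of processors available
    startup_overhead -- estimated parallel start-up cost (thread creation,
                        scheduling, synchronization at loop exit)
    """
    sequential_time = iterations * cost_per_iter
    parallel_time = startup_overhead + sequential_time / n_procs
    return parallel_time < sequential_time

# A small loop may not amortize the start-up overhead; a large one does.
print(worth_parallelizing(iterations=100,    cost_per_iter=10, n_procs=8,
                          startup_overhead=5000))  # False
print(worth_parallelizing(iterations=10_000, cost_per_iter=10, n_procs=8,
                          startup_overhead=5000))  # True

Only when the loop nest contains enough work to amortize the start-up overhead does parallel execution pay off, which is precisely the decision such an estimate is meant to support.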
Evaluation of Parallelizing Compilers
The recognition and exploitation of parallelism is a difficult problem for restructuring compilers. We present a method for evaluating the effectiveness of parallelizing compilers in general and of specific compiler techniques. We also report two groups of measurements that are the results of using this technique. One evaluates a commercially available parallelizer, KAP/Concurrent, and the other co...
Parallelization of NAS Benchmarks for Shared Memory Multiprocessors
This paper presents our experience parallelizing the sequential implementation of the NAS benchmarks using compiler directives on the SGI Origin2000 distributed shared memory (DSM) system. Porting existing applications to new high-performance parallel and distributed computing platforms is a challenging task. Ideally, a user develops a sequential version of the application, leaving the task of port...